Search Results: "lars"

24 April 2020

Mark Brown: Book club: Our Software Dependency Problem

A short while ago Daniel, Lars and I met to discuss Russ Cox's excellent essay Our Software Dependency Problem. This essay looks at software reuse in general, especially in the context of modern distribution methods like PyPI and NPM which make the whole process much more frictionless than traditional distribution methods used with languages like C. Possibly our biggest conclusion was that the essay is so eminently sensible that we mostly just talked about how much we agreed with it and how comprehensive it was; we particularly admired the clarity with which it explores how to evaluate the quality of free software projects. Next time we'll have to pick something more controversial to discuss!

7 April 2020

Shirish Agarwal: GMRT 2020 and lots of stories

First of all, congratulations to all those who got us DebConf 2022, so we will finally have a DebConf in India. There is, of course, a lot of work to be done between now and then. For those looking forward to visiting India, and especially Kochi, I would suggest listening to this enriching tale.
I am sorry I used a YouTube link, but it is too good a podcast not to be shared. Those who don't want YouTube can use the invidio.us link shared below: https://www.invidio.us/watch?v=BvjgKuKmnQ4 I am sure there are a lot more details, questions, answers etc., but I would gently direct those to Praveen, Shruti, Balasankar and the rest who are from Kochi, if you have any questions about that history.

National Science Day, GMRT 2020 First, as always, we are and were grateful to both NCRA as well as GMRT for taking such good care of us. Even though Akshat was not around, probably getting engaged, a few of us were there: about 6-7 from Mozilla Nasik, while the rest represented the FOSS community. Here is a small picture which commemorates the event
National Science Day, GMRT 2020
There is and was a lot to share about the event. For example, Akshay had brought an RPi Zero as well as an RPi 2 (Raspberry Pis) and showed some things. He had also brought along a Debian stable live drive with persistence, although the glare from the sun was so strong that we couldn't show it clearly to students. This was also the case with the RPis, but still we shared what and how much we could. Maybe next year we either ask them to have double screens or give us a dark room so we can showcase things much better. We did try playing with the contrast and all, but it didn't have much of an effect. Of course, in another stall a few students had used RPis as part of their projects, so at times we did tell some of the newbies to go to those stalls and see and ask about those projects, so they would have a much wider experience of things. The Mozilla people were pushing VR as well as Mozilla Lite, the browser for mobile. We also gossiped quite a bit. I shared about indicatelts, a third-party certificate extension, although I dunno if I should file a WNPP bug about it or not. We had a bad experience earlier when I had put in an RFP (Request for Package), which was accepted, for an extension with similar functionality; we later came to know that it was calling home and sharing both the URLs people visited and the IP addresses they were using it from. Sadly, it didn't leave a good taste in the mouth.

Delhi Riots One thing I have been disappointed with is the lack of general awareness about things, especially in the youth. We have people who didn't know that, for example, in the Delhi riots which happened recently, law and order (the police) lies with the Home Minister of India, Amit Shah. This is perhaps the only capital in the world which has its own Chief Minister but doesn't have any say on its law and order. And this has been the case for the last 70 years, i.e. since independence. The closest parallel I know of is the UK, but they too changed their tune in 2012. India, and especially Delhi, seems to be in a time-capsule which, while being dysfunctional, somehow is made to work. In many ways it is three bodies, or a body split into three personalities, which often makes governance a messy issue, but that is probably a topic for another day. In fact, Scroll had written a beautiful editorial that full statehood for Delhi was not only Arvind Kejriwal's (AAP) call but also something that both the BJP as well as the Congress had asked for in the past. In fact, nothing about the policing is in AAP's power. All salaries, postings and transfers of police personnel, everything, is done by the Home Ministry, so if any blame has to be given, it has to be given to the Home Ministry.

American Capitalism and Ventilators America has a history of high-cost healthcare, as can be seen in this edition of USA Today from 2017. The Affordable Care Act was signed into law by President Obama in 2010, which Mr. Trump curtailed when he came into power a couple of years back. An estimated 80,000 people died due to seasonal flu in 2018-19. Similarly, anywhere between 24,000 and 63,000 are supposed to have died from last October to February-March this year. So the richest country can't take care of its population, which is a third of the population of this country, even though the United States has thrice the area that India has. I am sharing this because seasonal flu also strikes the elderly as well as young children more than adults, so in one sense the vulnerable groups overlap, although from some of the recent stats, for Covid-19 even those who are 20+ are also vulnerable, but that's another story altogether. If you see the CDC graph of the seasonal flu, it is clear that American health experts knew about it. Another common factor which joins both the seasonal flu and Covid is that both need ventilators for the most serious cases. So, in 2007 it was decided that the number of ventilators needed to be ramped up; they had approximately 62k ventilators at that point in time all over the U.S. In 2010, the U.S. asked for bids and got one from a small Californian company called Newport Medical Instruments. The price of ventilators at the time was approximately INR 700,000 at 2010 prices, while Newport said they would be able to mass-produce them at INR 200,000 at 2010 prices. The company got the order and started designing the model, which needed to be certified by the FDA. By 2011, they had the product ready, when a big company called Covidien bought Newport Medical and shut down the project. This was shared in a press release in 2012. The whole story was broken by the New York Times just a few days ago, highlighting how American capitalism rode roughshod over public health and put people's lives unnecessarily in jeopardy. If those new-age ventilators had become a reality, then not just the U.S. but India and many other countries would have bought them, as every country has the same or similar needs but is unable to pay the high cost, which in many cases would be passed on to citizens either as the price of service, or by raising taxes, or a mixture of both, with the public being none the wiser. Due to the dearth of ventilators, of specialized people to operate them, and of space, there is a possibility that many countries, including India, may have to make tough choices like the Italian doctors had to, as to who gets a ventilator, and carry the mental and emotional guilt associated with those choices.

Some science coverage about diseases in The Wire and other publications Since Covid coverage broke out, The Wire has been bringing various reports of India's handling of various epidemics, mysteries, some solved, some still remaining unsolved due to lack of interest or funding or both. The Nipah virus has been amply discussed in the movie Virus (2019), which I shared in the last blog post, and how easily Kerala could have gone the way of Italy. Thankfully, only 24 people, including a nurse, succumbed to that outbreak, as shared in the movie. I had shared about Kerala nurses' professionalism when I was in hospital a couple of years back. It's no wonder that their understanding of hygiene and nursing procedures is a cut above the rest; hence they are sought after not just in India but world-over, including the US, the UK and the Middle East. Another study, on respiratory illness, was brought to my attention by my friend Pavithran.

Possibility of extended lockdown in India There has been talk in the media of an extended lockdown, or better put, an environment is being created so that an extended lockdown can be imposed. This is probably in part due to a mathematical model and its derivatives shared about a week back by two Indian-origin Cambridge scholars, who predict that a minimum 49-day lockdown may be necessary to flatten the Covid curve.
Predictions of the outcome of the current 21-day lockdown (Source: Rajesh Singh, R. Adhikari, Cambridge University)
Alternative lockdown strategies suggested by the Cambridge model (Source: Rajesh Singh, R. Adhikari, Cambridge University)
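For readers unfamiliar with such models, here is a toy SIR (susceptible-infected-recovered) simulation in Python. It is emphatically not the authors' age-structured Cambridge model, and every parameter in it is invented; it only illustrates the general idea that a temporary cut in the contact rate slows the epidemic and pushes the peak later, buying time.

def simulate(beta_normal, beta_lockdown, lockdown_days, days=300, gamma=1.0 / 14):
    # Fractions of the population: susceptible, infected, recovered.
    s, i, r = 0.999, 0.001, 0.0
    peak, peak_day = 0.0, 0
    for day in range(days):
        # A lockdown is modelled as a lower contact rate for its duration.
        beta = beta_lockdown if day < lockdown_days else beta_normal
        new_infections = beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        if i > peak:
            peak, peak_day = i, day
    return peak, peak_day

for days in (0, 21, 49):
    peak, when = simulate(beta_normal=0.3, beta_lockdown=0.1, lockdown_days=days)
    print(f"lockdown of {days:2d} days: peak infected fraction {peak:.2f} on day {when}")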
India caving to US pressure on Hydroxychloroquine While there has been a lot of speculation in the U.S. about hydroxychloroquine as a wonder cure, last night Mr. Trump, in response to a reporter, threatened that there may be retaliation if Mr. Modi said no to sharing hydroxychloroquine.
As shared before, if YouTube is not your cup of tea you can see the same on invidio.us: https://www.invidio.us/watch?v=YP-ewgoJPLw There have been several instances in the past of the U.S. trying to bully India, going all the way back to 1954. In recent memory, there were U.S. sanctions on India in 1998, under the Atal Bihari Vajpayee government (BJP), but he didn't buckle under the pressure; now we see our current PM taking down our own notification from a day ago and sharing not just hydroxychloroquine but also paracetamol with other countries, so it would look as if India is sharing with other countries. Keep in mind that India and Brazil haven't seen eye to eye on trade agreements of late, and paracetamol prices have risen in India. The price rise has been because the API (Active Pharmaceutical Ingredient) for it comes from China, where the supply chain will take time to be fixed, and we would also have to open up, although whether we should or not is another question altogether. I talk about supply chains because lean supply chains have been the talk since the late 90s, when the Japanese introduced just-in-time manufacturing, which led to lean supply chains as well as a lot of outsourcing as a consequence. Of course the companies saved money, but at the cost of flexibility, and how this model was perhaps flawed was shown by a series of articles in the Economist as early as 2004, when there were a lot of shocks to that model; it has only been exacerbated since then. There have been frequent shocks to this fragile ecosystem, more so since the financial meltdown of 2008, and this would put more companies out of business than ever before. The MSME sector in India had already been severely impacted, first by demonetization and then by the horrendous implementation of GST, whose cries can be heard from all sectors. The frequent changing of GST rates has also made markets jumpy and investors unsure. With judgements such as retrospective taxes, AGR (Adjusted Gross Revenue) etc., not only international investors but also domestic investors have been scared off. The flight of capital has been noticeable. I had shared this before, when the Indian Government put out the LRS report, which it hasn't done since. In fact, Outlook Business had an interesting article about it, which incidentally talked about LocalCircles, a community networking platform where you get to know of a lot of things, and of which I am also a member. At the very end, I apologize for not sharing this blog post earlier; I was feeling down, but then I'm not the only one.

24 March 2020

Russ Allbery: Review: Lost in Math

Review: Lost in Math, by Sabine Hossenfelder
Publisher: Basic
Copyright: June 2018
ISBN: 0-465-09426-0
Format: Kindle
Pages: 248
Listening to experts argue can be one of the better ways to learn about a new field. It does require some basic orientation and grounding or can be confusing or, worse, wildly misleading, so some advance research or Internet searches are warranted. But it provides some interesting advantages over reading multiple popular introductions to a field. First, experts arguing with each other are more precise about their points of agreement and disagreement because they're trying to persuade someone who is well-informed. The points of agreement are often more informative than the points of disagreement, since they can provide a feel for what is uncontroversial among experts in the field. Second, internal arguments tend to be less starry-eyed. One of the purposes of popularizations of a field is to get the reader excited about it, and that can be fun to read. But to generate that excitement, the author has a tendency to smooth over disagreements and play up exciting but unproven ideas. Expert disagreements pull the cover off of the uncertainty and highlight the boundaries of what we know and how we know it. Lost in Math (subtitled How Beauty Leads Physics Astray) is not quite an argument between experts. That's hard to find in book form; most of the arguments in the scientific world happen in academic papers, and I rarely have the energy or attention span to read those. But it comes close. Hossenfelder is questioning the foundations of modern particle physics for the general public, but also for her fellow scientists. High-energy particle physics is facing a tricky challenge. We have a solid theory (the standard model) which explains nearly everything that we have currently observed. The remaining gaps are primarily at very large scales (dark matter and dark energy) or near phenomena that are extremely difficult to study (black holes). For everything else, the standard model predicts our subatomic world to an exceptionally high degree of accuracy. But physicists don't like the theory. The details of why are much of the topic of this book, but the short version is that the theory does not seem either elegant or beautiful. It relies on a large number of measured constants that seem to have no underlying explanation, which is contrary to a core aesthetic principle that physicists use to judge new theories. Accompanying this problem is another: New experiments in particle physics that may be able to confirm or disprove alternate theories that go beyond the standard model are exceptionally expensive. All of the easy experiments have been done. Building equipment that can probe beyond the standard model is incredibly expensive, and thus only a few of those experiments have been done. This leads to two issues: Particle physics has an overgrowth of theories (such as string theory) that are largely untethered from experiments and are not being tested and validated or disproved, and spending on new experiments is guided primarily by a sense of scientific aesthetics that may simply be incorrect. Enter Lost in Math. Hossenfelder's book picks up a thread of skepticism about string theory (and, in Hossenfelder's case, supersymmetry as well) that I previously read in Lee Smolin's The Trouble with Physics. But while Smolin's critique was primarily within the standard aesthetic and epistemological framework of particle physics, Hossenfelder is questioning that framework directly. Why should nature be beautiful? Why should constants be small? What if the universe does have a large number of free constants? 
And is the dislike of an extremely reliable theory on aesthetic grounds a good basis for guiding which experiments we fund?
Do you recall the temple of science, in which the foundations of physics are the bottommost level, and we try to break through to deeper understanding? As I've come to the end of my travels, I worry that the cracks we're seeing in the floor aren't really cracks at all but merely intricate patterns. We're digging in the wrong places.
Lost in Math will teach you a bit less about physics than Smolin's book, although there is some of that here. Smolin's book was about two-thirds physics and one-third sociology of science. Lost in Math is about two-thirds sociology and one-third physics. But that sociology is engrossing. It's obvious in retrospect, but I hadn't thought before about the practical effects of running out of unexplained data on a theoretical field, or about the transition from more data than we can explain to having to spend billions of dollars to acquire new data. And Hossenfelder takes direct aim at the human tendency to find aesthetically appealing patterns and unified explanations, and scores some palpable hits.
I went into physics because I don't understand human behavior. I went into physics because math tells it how it is. I liked the cleanliness, the unambiguous machinery, the command math has over nature. Two decades later, what prevents me from understanding physics is that I still don't understand human behavior. "We cannot give exact mathematical rules that define if a theory is attractive or not," says Gian Francesco Giudice. "However, it is surprising how the beauty and elegance of a theory are universally recognized by people from different cultures. When I tell you, 'Look, I have a new paper and my theory is beautiful,' I don't have to tell you the details of my theory; you will get why I'm excited. Right?" I don't get it. That's why I am talking to him. Why should the laws of nature care what I find beautiful? Such a connection between me and the universe seems very mystical, very romantic, very not me. But then Gian doesn't think that nature cares what I find beautiful, but what he finds beautiful.
The structure of this book is half tour of how physics judges which theories are worthy of investigation and half personal quest to decide whether physics has lost contact with reality. Hossenfelder approaches this second thread with multiple interviews of famous scientists in the field. She probes at their bases for preferring one theory over another, at how objective those preferences can or should be, and what it means for physics if they're wrong (as increasingly appears to be the case for supersymmetry). In so doing, she humanizes theory development in a way that I found fascinating. The drawback to reading about ongoing arguments is the lack of a conclusion. Lost in Math, unsurprisingly, does not provide an epiphany about the future direction of high-energy particle physics. Its conclusion, to the extent that it has one, is a plea to find a way to put particle physics back on firmer experimental footing and to avoid cognitive biases in theory development. Given the cost of experiments and the nature of humans, this is challenging. But I enjoyed reading this questioning, contrarian take, and I think it's valuable for understanding the limits, biases, and distortions at the edge of new theory development. Rating: 7 out of 10

23 March 2020

Bits from Debian: New Debian Developers and Maintainers (January and February 2020)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

20 March 2020

Molly de Blanc: Seven hundred words on Internet access

I wrote this a few months ago, and never published it. Here you go. In the summer of 2017, I biked from Boston, MA to Montreal, QC. I rode across Massachusetts, then up the New York/Vermont border, weaving between the two states over two days. I spent the night in Washington County, NY at a bed and breakfast that generously fed me dinner even though they weren't supposed to. One of the proprietors told me about his history as a physics teacher, and talked about volunteer work he was doing. He somewhat casually mentioned that in his town there isn't really internet access. At the time (at least), Washington County wasn't served by broadband companies. Instead, for $80 a month you could purchase a limited data package from a mobile phone company, and use that. A limited data package means limited access. This could mean no or limited internet in schools or libraries. This was not the first time I heard about failings of Internet penetration in the United States. When I first moved to Boston I was an intern at One Laptop Per Child. I spoke with someone interested in bringing internet access to their rural community in Maine. They had hope for mesh networks, linking computers together into a web of connectivity, bouncing signals from one machine to another in order to bring internet to everyone. Access to the Internet is a necessity. As I write this, 2020 is only weeks away, which brings our decennial, nationwide census. There had been discussions of making the census entirely online, but it was settled that people could fill it out online, by telephone, or via mail, and that households can answer the questions on the internet or by phone in English and 12 non-English languages. [1][2] This is important because a comprehensive census is important. A census provides, if nothing else, population and demographics information, which is used to assist in the disbursement of government funding and grants to geographic communities. Apportionment, or the redistribution of the 435 seats occupied by members of the House of Representatives, is done based on the population of a given state: more people, more seats. Researchers, students, and curious people use census data to carry out their work. Non-profits and activist organizations can better understand the populations they serve. As things like the Census increasingly move online, the availability of access becomes increasingly important. Some things are only available online, including job applications, customer service assistance, and even education opportunities like courses, academic resources, and applications for grants, scholarships, and admissions. The Internet is also a necessary point of connection between people, and necessary for building our identities. Being acknowledged with their correct names and pronouns decreases the risk of depression and suicide among trans youths, and one assumes adults as well. [3] Online spaces provide acknowledgment and recognition that is not being met in physical spaces and geographic communities. Internet access has been important to me in my own mental health struggles and understanding. My bipolar exhibits itself through long, crushing periods of depression during which I can do little more than wait for it to be over. I fill these quiet spaces by listening to podcasts and talking with my friends using apps like Signal to manage our communications. My story of continuous recovery includes a particularly gnarly episode of bulimia in 2015.
I was only able to really acknowledge that I had a problem with food and purging, using both as opportunities to inflict violence on myself, when reading Tumblr posts by people with eating disorders. This made it possible for me to talk about my purging with my therapist, my psychiatrist, and my doctor, in order to modify my treatment plan and start getting the help I need. All of these things are made possible by having reliable, fast access to the Internet. We can respond to our needs immediately, regardless of where we are. We can find or build the communities we need, and serve the ones we already live in, whether they're physical or exist purely as digital. [1]: https://census.lacounty.gov/census/ Accessed 29.11.2019
[2]: https://www.census.gov/library/stories/2019/03/one-year-out-census-bureau-on-track-for-2020-census.html Accessed 29.11.2019
[3]: https://news.utexas.edu/2018/03/30/name-use-matters-for-transgender-youths-mental-health/ Accessed 29.11.2019

12 November 2017

Lars Wirzenius: Unit and integration testing: an analogy with cars

A unit is a part of your program you can test in isolation. You write unit tests to test all aspects of it that you care about. If all your unit tests pass, you should know that your unit works well. Integration tests are for testing that when your various well-tested, high quality units are combined, integrated, they work together. Integration tests test the integration, not the individual units. You could think of building a car. Your units are the ball bearings, axles, wheels, brakes, etc. Your unit tests for the ball bearings might test, for example, that they can handle a billion rotations, at various temperatures, etc. Your integration test would assume the ball bearings work, and should instead test that the ball bearings are installed in the right way so that the car, as a whole, can run a thousand kilometers, accelerating and braking every kilometer, using only so much fuel, producing only so much pollution, and not killing passengers in case of a crash.
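To make the analogy concrete, here is a minimal sketch using Python's unittest; the Bearing and Car classes and all the numbers are made up purely for illustration. The unit test exercises one bearing in isolation, while the integration test assumes bearings work and checks the assembled car.

import unittest

class Bearing:
    """A toy 'unit': tracks rotations and fails past its rated limit."""
    def __init__(self, rated_rotations):
        self.rated_rotations = rated_rotations
        self.rotations = 0

    def rotate(self, times):
        self.rotations += times
        if self.rotations > self.rated_rotations:
            raise RuntimeError("bearing worn out")

class Car:
    """A toy assembly that integrates four bearings."""
    def __init__(self, bearings):
        self.bearings = bearings

    def drive(self, km):
        for bearing in self.bearings:
            bearing.rotate(km * 500)  # assume 500 rotations per kilometer

class TestBearingUnit(unittest.TestCase):
    def test_survives_rated_rotations(self):
        # Unit test: the bearing alone, right at its rated limit.
        bearing = Bearing(rated_rotations=1_000_000)
        bearing.rotate(1_000_000)  # must not raise
        self.assertEqual(bearing.rotations, 1_000_000)

class TestCarIntegration(unittest.TestCase):
    def test_car_runs_a_thousand_kilometers(self):
        # Integration test: assume each bearing works (the unit tests
        # cover that); check the assembled car survives the trip.
        car = Car([Bearing(rated_rotations=1_000_000) for _ in range(4)])
        car.drive(1000)  # must not raise

if __name__ == "__main__":
    unittest.main()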

19 October 2017

Steinar H. Gunderson: Introducing Narabu, part 2: Meet the GPU

Narabu is a new intraframe video codec. You may or may not want to read part 1 first. The GPU, despite being vastly more flexible than it was fifteen years ago, is still a very different beast from your CPU, and not all problems map well to it performance-wise. Thus, before designing a codec, it's useful to know what our platform looks like. A GPU has lots of special functionality for graphics (well, duh), but we'll be concentrating on the compute shader subset in this context, ie., we won't be drawing any polygons. Roughly, a GPU (as I understand it!) is built up about as follows:

A GPU contains 1-20 cores; NVIDIA calls them SMs (shader multiprocessors), Intel calls them subslices. (Trivia: A typical mid-range Intel GPU contains two cores, and thus is designated GT2.) One such core usually runs the same program, although on different data; there are exceptions, but typically, if your program can't fill an entire core with parallelism, you're wasting energy.

Each core, in addition to tons (thousands!) of registers, also has some shared memory (also called local memory sometimes, although that term is overloaded), typically 32-64 kB, which you can think of in two ways: either as a sort-of explicit L1 cache, or as a way to communicate internally on a core. Shared memory is a limited, precious resource in many algorithms.

Each core/SM/subslice contains about 8 execution units (Intel calls them EUs, NVIDIA/AMD call them something else) and some memory access logic. These multiplex a bunch of threads (say, 32) and run in a round-robin-ish fashion. This means that a GPU can handle memory stalls much better than a typical CPU, since it has so many streams to pick from; even though each thread runs in-order, it can just kick off an operation and then go to the next thread while the previous one is working.

Each execution unit has a bunch of ALUs (typically 16) and executes code in a SIMD fashion. NVIDIA calls these ALUs "CUDA cores", AMD calls them "stream processors". Unlike on a CPU, this SIMD has full scatter/gather support (although sequential access, especially in certain patterns, is much more efficient than random access), lane enable/disable so it can work with conditional code, etc. The typically fastest operation is a 32-bit float muladd; usually that's single-cycle. GPUs love 32-bit FP code. (In fact, in some GPU languages, you won't even have 8-, 16-bit or 64-bit types. This is annoying, but not the end of the world.)

The vectorization is not exposed to the user in typical code (GLSL has some vector types, but they're usually just broken up into scalars, so that's a red herring), although in some programming languages you can get to swizzle the SIMD stuff internally to take advantage of that (there are also schemes for broadcasting bits by voting etc.). However, it is crucially important to performance; if you have divergence within a warp, the GPU needs to execute both sides of the if. So less divergent code is good. Such a SIMD group is called a warp by NVIDIA (I don't know if the others have names for it). NVIDIA has SIMD/warp width always 32; AMD used to be 64 but is now 16. Intel supports 4-32 (the compiler will autoselect based on a bunch of factors), although 16 is the most common. The upshot of all of this is that you need massive amounts of parallelism to be able to get useful performance out of a GPU.
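To get some intuition for the divergence point above, here is a toy, purely sequential Python sketch of lane masking; nothing here is a real GPU API, and all names and numbers are made up. Every lane steps through both sides of a divergent if, and a mask decides which lanes keep which result, so a diverged warp pays the cost of then and else combined.

WARP_WIDTH = 32

def run_warp(values):
    # Per-lane condition; True lanes take the "then" side, False lanes the "else" side.
    mask = [v % 2 == 0 for v in values]
    results = list(values)
    # "then" side: the whole warp steps through it, but results are
    # committed only where the mask holds.
    for lane in range(WARP_WIDTH):
        tmp = values[lane] * 2
        if mask[lane]:
            results[lane] = tmp
    # "else" side: the whole warp steps through this too, committed
    # only where the mask does not hold.
    for lane in range(WARP_WIDTH):
        tmp = values[lane] + 1
        if not mask[lane]:
            results[lane] = tmp
    return results

print(run_warp(list(range(WARP_WIDTH))))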
A rule of thumb is that if you could have launched about a thousand threads for your problem on CPU, it's a good fit for a GPU, although this is of course just a guideline. There's a ton of APIs available to write compute shaders. There's CUDA (NVIDIA-only, but the dominant player), D3D compute (Windows-only, but multi-vendor), OpenCL (multi-vendor, but highly variable implementation quality), OpenGL compute shaders (all platforms except macOS, which has too old drivers), Metal (Apple-only) and probably some that I forgot. I've chosen to go for OpenGL compute shaders since I already use OpenGL shaders a lot, and this saves on interop issues. CUDA probably is more mature, but my laptop is Intel. :-) No matter which one you choose, the programming model looks very roughly like this pseudocode:
for (size_t workgroup_idx = 0; workgroup_idx < NUM_WORKGROUPS; ++workgroup_idx) {  // in parallel over cores
        char shared_mem[REQUESTED_SHARED_MEM];  // private for each workgroup
        for (size_t local_idx = 0; local_idx < WORKGROUP_SIZE; ++local_idx) {  // in parallel on each core
                main(workgroup_idx, local_idx, shared_mem);
        }
}
except that in reality, the indices will be split in x/y/z for your convenience (you control all six dimensions, of course), and if you haven't asked for too much shared memory, the driver can silently make larger workgroups if it helps increase parallelism (this is totally transparent to you). main() doesn't return anything, but you can do reads and writes as you wish; GPUs have large amounts of memory these days, and staggering amounts of memory bandwidth. Now for the bad part: generally, you will have no debuggers, no way of logging and no real profilers (if you're lucky, you can get to know how long each compute shader invocation takes, but not what takes time within the shader itself). Especially the latter is maddening; the only real recourse you have is some timers, and then placing timer probes or trying to comment out sections of your code to see if something goes faster. If you don't get the answers you're looking for, forget printf; you need to set up a separate buffer, write some numbers into it, and pull that buffer back down to the CPU. Profilers are an essential part of optimization, and I had really hoped the world would be more mature here by now. Even CUDA doesn't give you all that much insight; sometimes I wonder if all of this is because GPU drivers and architectures are meant to be shrouded in mystery for competitiveness reasons, but I'm honestly not sure. So that's it for a crash course in GPU architecture. Next time, we'll start looking at the Narabu codec itself.

10 October 2017

Lars Wirzenius: Debian and the GDPR

GDPR is a new EU regulation for privacy. The name is short for "General Data Protection Regulation" and it covers all organisations that handle personal data of EU citizens and EU residents. It will become enforceable May 25, 2018 (Towel Day). This will affect Debian. I think it's time for Debian to start working on compliance, mainly because the GDPR requires sensible things. I'm not an expert on GDPR legislation, but here's my understanding of what we in Debian should do: There's more, but let's start with those. I think Debian has at least the following systems that will need to be reviewed with regards to the GDPR: There may be more; these are just off the top of my head. I expect that mostly Debian will be OK, but we can't just assume that.

8 October 2017

Michael Stapelberg: Debian stretch on the Raspberry Pi 3 (update)

I previously wrote about my Debian stretch preview image for the Raspberry Pi 3. Now, I'm publishing an updated version, containing the following changes: A couple of issues remain, notably the lack of WiFi and Bluetooth support (see wiki:RaspberryPi3 for details). Any help with fixing these issues is very welcome! As a preview version (i.e. unofficial, unsupported, etc.) until all the necessary bits and pieces are in place to build images in a proper place in Debian, I built and uploaded the resulting image. Find it at https://people.debian.org/~stapelberg/raspberrypi3/2017-10-08/. To install the image, insert the SD card into your computer (I'm assuming it's available as /dev/sdb) and copy the image onto it:
$ wget https://people.debian.org/~stapelberg/raspberrypi3/2017-10-08/2017-10-08-raspberry-pi-3-buster-PREVIEW.img.bz2
$ bunzip2 2017-10-08-raspberry-pi-3-buster-PREVIEW.img.bz2
$ sudo dd if=2017-10-08-raspberry-pi-3-buster-PREVIEW.img of=/dev/sdb bs=5M
If resolving client-supplied DHCP hostnames works in your network, you should be able to log into the Raspberry Pi 3 using SSH after booting it:
$ ssh root@rpi3
# Password is "raspberry"

2 October 2017

Lars Wirzenius: Attracting contributors to a new project

How do you attract contributors to a new free software project? I'm in the very early stages of a new personal project. It is irrelevant for this blog post what the new project actually is. Instead, I am thinking about the following question:
Do I want the project to be mainly for myself, and maybe a handful of others, or do I want to try to make it a more generally useful, possibly even a well-known, popular project? In other words, do I want to just solve a specific problem I have or try to solve it for a large group of people?
If it's a personal project, I'm all set. I can just start writing code. (In fact, I have.) If it's the latter, I'll need to attract contributions from others, and how do I do that? I asked that question on Twitter and Mastodon and got several suggestions. This is a summary of those, with some editorialising from me. I don't know if these things are all correct, or whether they're enough to grow a successful, popular project. Karl Fogel's seminal book Producing Open Source Software should also be mentioned.

26 September 2017

Norbert Preining: Debian/TeX Live 2017.20170926-1

A full month or more has passed since the last upload of TeX Live, so it was high time to prepare a new package. Nothing spectacular here I have to say: two small bugs fixed and the usual long list of updates and new packages. Among the new packages I found fontloader-luaotfload an interesting project. Loading fonts via Lua code in LuaTeX is by now standard, and this package allows for experiments with newer/alternative font loaders. Another very interesting newcomer is pdfreview, which lets you set pages of another PDF on a lined background and add notes to it, good for reviewing. Enjoy. New packages abnt, algobox, beilstein, bib2gls, cheatsheet, coelacanth, dijkstra, dynkin-diagrams, endofproofwd, fetchcls, fixjfm, fontloader-luaotfload, forms16be, hithesis, ifxptex, komacv-rg, ku-template, latex-refsheet, limecv, mensa-tex, multilang, na-box, notes-tex, octave, pdfreview, pst-poker, theatre, upzhkinsoku, witharrows. Updated packages 2up, acmart, acro, amsmath, animate, babel, babel-french, babel-hungarian, bangorcsthesis, beamer, beebe, biblatex-gost, biblatex-philosophy, biblatex-source-division, bibletext, bidi, bpchem, bxjaprnind, bxjscls, bytefield, checkcites, chemmacros, chet, chickenize, complexity, curves, cweb, datetime2-german, e-french, epstopdf, eqparbox, esami, etoc, fbb, fithesis, fmtcount, fnspe, fontspec, genealogytree, glossaries, glossaries-extra, hvfloat, ifptex, invoice2, jfmutil, jlreq, jsclasses, koma-script, l3build, l3experimental, l3kernel, l3packages, latexindent, libertinust1math, luatexja, lwarp, markdown, mcf2graph, media9, nddiss, newpx, newtx, novel, numspell, ocgx2, philokalia, phfqit, placeat, platex, poemscol, powerdot, pst-barcode, pst-cie, pst-exa, pst-fit, pst-func, pst-geometrictools, pst-ode, pst-plot, pst-pulley, pst-solarsystem, pst-solides3d, pst-tools, pst-vehicle, pst2pdf, pstricks, pstricks-add, ptex-base, ptex-fonts, pxchfon, quran, randomlist, reledmac, robustindex, scratch, skrapport, spectralsequences, tcolorbox, tetex, tex4ht, texcount, texdef, texinfo, texlive-docindex, texlive-scripts, tikzducks, tikzsymbols, tocloft, translations, updmap-map, uplatex, widetable, xepersian, xetexref, xint, xsim, zhlipsum.

27 August 2017

Andrew Cater: BBQ Cambridge 2017 - post 2

We were all up until about 0100 :) House full of folk talking about all sorts, a game of Mao. Garden full of people clustered round the barbeque or sitting chatting - I had a long chat about Debian, what it means and how it's often an easier world to deal with and move in than the world of work, office politics or whatever - being here is being at home.

Arguments in the kitchen over how far coffee "just happens" with the magic bean to cup machine, some folk are in the garden preparing for breakfast at noon.

I missed the significance of this week's date - the 26th anniversary of Linus' original announcement of Linux in 1991 fell on Friday. Probably the first user of Linux who installed it from scratch was Lars Wirzenius - who was here yesterday.

Debian's 24th birthday was just about ten days ago on 16th August, making it the second oldest distribution and I reckon I've been using it for twenty one of those years - I wouldn't change it for the world.

13 August 2017

Lars Wirzenius: Retiring Obnam

This is a difficult announcement to write. The summary is if you use Obnam you should switch to another backup program in the coming months. The first commit to Obnam's current code base is this:
commit 7eaf5a44534ffa7f9c0b9a4e9ee98d312f2fcb14
Author: Lars Wirzenius <liw@iki.fi>
Date:   Wed Sep 6 18:35:52 2006 +0300
    Initial commit.
It's followed by over 5200 more commits until the latest one, which is from yesterday. The NEWS file contains 58 releases. There are 20761 lines of Python, 15384 words in the English language manual, with translations in German and French. The yarn test suite, which is a kind of a manual, is another 13382 words in English and pseudo-English. That's a fair bit of code and prose. Not all of it mine, I've had help from some wonderful people. But most of it mine. I wrote all of that because backups were fun. It was pleasing to use my own program to guarantee the safety of my own data. The technical challenges of implementing the kind of backup program I wanted were interesting, and solving interesting problems is a big part of why I am a programmer. Obnam has a kind user base. It's not a large user base: the Debian "popularity contest" service estimates it at around 500. But it's a user base that is kind and has treated me well. I have tried to reciprocate. Unfortunately, this has changed: I have not had fun while developing Obnam for some time now. A few years ago, I lived in Manchester, UK, and commuted by train to work. It was a short train ride, about 15 minutes. At times I would sit on the floor with my laptop on my knees, writing code or the manual. Back then Obnam was a lot of fun. I was excited, and enthusiastic. In the past two years or so, I've not been able to feel that excitement again. My native language, Finnish, has an expression describing unpleasant tasks: something is as much fun as drinking tar. That describes Obnam in recent years for me. Obnam has not turned out well, from a maintainability point of view. It seems that every time I try to fix something, I break something else. Usually what breaks is speed or memory use: Obnam gets slower or starts using even more memory. For several years now I've been working on a new repository format for Obnam, code named GREEN ALBATROSS. It was meant to solve Obnam's problems as far as extensibility, performance, and resource use were concerned. It seems to have failed. I'm afraid I've had enough. I'm going to retire Obnam as a project and as a program, and move on to doing something else, so I can feel excitement and pleasure again. After some careful thought, I fear that the maintainability problems of Obnam can realistically only be solved by a complete rewrite from scratch, and I'm not up to doing that. If you use Obnam, you should migrate to some other backup solution. Don't worry, you have until the end of the year. I will be around and I intend to fix any serious bugs in Obnam; in particular, security flaws. But you should start looking for a replacement sooner rather than later. I will be asking for Obnam to be removed from the Debian unstable and testing branches. The next Debian release (buster, Debian 10) won't include Obnam. The Obnam mailing lists are kindly hosted by Daniel Silverstone, and they will remain, but later this year I will change them to be moderated. The Obnam git repository will remain. The web site will remain, but I will add a note that Obnam is no longer maintained. Other Obnam online resources may disappear. If you would like to take over the Obnam project, and try to resolve the various issues, please contact me to discuss that. Thank you, and may you never need to restore.

Enrico Zini: Consensually doing things together?

On 2017-08-06 I gave a talk at DebConf17 in Montreal titled "Consensually doing things together?" (video). Here are the talk notes. Abstract At DebConf Heidelberg I talked about how Free Software has a lot to do with consensually doing things together. Is that always true, at least in Debian? I'd like to explore what motivates one to start a project and what motivates one to keep maintaining it. What are the energy levels required to manage bits of Debian as the project keeps growing. How easy it is to say no. Whether we have roles in Debian that require irreplaceable heroes to keep them going. What could be done to make life easier for heroes, easy enough that mere mortals can help, or take their place. Unhappy is the community that needs heroes, and unhappy is the community that needs martyrs. I'd like to try and make sure that now, or in the very near future, Debian is not such an unhappy community. Consensually doing things together I gave a talk in Heidelberg. Valhalla made stickers; Debian France distributed many of them. There's one on my laptop. Which reminds me of what we ought to be doing. Of what we have a chance to do, if we play our cards right. I'm going to talk about relationships. Consensual relationships. Relationships, in short. Nonconsensual relationships are usually called abuse. I like to see Debian as a relationship between multiple people. And I'd like it to be a consensual one. I'd like it not to be abuse. Consent From Wikipedia:
In Canada "consent means the voluntary agreement of the complainant to engage in sexual activity" without abuse or exploitation of "trust, power or authority", coercion or threats.[7] Consent can also be revoked at any moment.[8] There are 3 pillars often included in the description of sexual consent, or "the way we let others know what we're up for, be it a good-night kiss or the moments leading up to sex." They are:
  • Knowing exactly what and how much I'm agreeing to
  • Expressing my intent to participate
  • Deciding freely and voluntarily to participate[20]
Saying "I've decided I won't do laundry anymore" when the other partner is tired, or busy doing things. Is different than saying "I've decided I won't do laundry anymore" when the other partner has a chance to say "why? tell me more" and take part in negotiation. Resources: Relationships Debian is the Universal Operating System. Debian is made and maintained by people. The long term health of debian is a consequence of the long term health of the relationship between Debian contributors. Debian doesn't need to be technically perfect, it needs to be socially healthy. Technical problems can be fixed by a healty community. graph showing relationship between avoidance, accomodation, compromise, competition, collaboration The Thomas-Kilmann Conflict Mode Instrument: source png. Motivations Quick poll: What are your motivations to be in a relationship? Which of those motivations are healthy/unhealthy? "Galadriel" (noun, by Francesca Ciceri): a task you have to do otherwise Sauron takes over Middle Earth See: http://blog.zouish.org/nonupdd/#/22/1 What motivates me to start a project or pick one up? What motivates me to keep maintaning a project? What motivates you? What's an example of a sustainable motivation? Is it really all consensual in Debian? Energy Energy that thing which is measured in spoons. The metaphore comes from people suffering with chronic health issues:
"Spoons" are a visual representation used as a unit of measure used to quantify how much energy a person has throughout a given day. Each activity requires a given number of spoons, which will only be replaced as the person "recharges" through rest. A person who runs out of spoons has no choice but to rest until their spoons are replenished.
For example, in Debian, I could spend: What is one person capable of doing? Have reasonable expectations, on others: Have reasonable expectations, on yourself: Debian is a shared responsibility When spoons are limited, what takes more energy tends not to get done As the project grows, project-wide tasks become harder Are they still humanly achievable? I don't want Debian to have positions that require hero-types to fill them Dictatorship of who has more spoons: Perfectionism You are in a relationship that is just perfect. All your friends look up to you. You give people relationship advice. You are safe in knowing that You Are Doing It Right. Then one day you have an argument in public. You don't just have to deal with the argument, but also with your reputation and self-perception shattering. One thing I hate about Debian: consistent technical excellence. I don't want to be required to always be right. One of my favourite moments in the history of Debian is the openssl bug. Debian doesn't need to be technically perfect, it needs to be socially healthy; technical problems can be fixed. I want to remove perfectionism from Debian: if we discover we've been wrong all the time in something important, it's not the end of Debian, it's the beginning of an improved Debian. Too good to be true There comes a point in most people's dating experience where one learns that when some things feel too good to be true, they might indeed be. There are people who cannot say no: There are people who cannot take a no: Note the diversity statement: it's not a problem to have one of those (and many other) tendencies, as long as one manages to keep interacting constructively with the rest of the community. Also, it is important to be aware of these patterns, to be able to compensate for one's own tendencies. What happens when an avoidant person meets a narcissistic person, and they are both unaware of the risks? Resources: Note: there are problems with the way these resources are framed: Red flag / green flag http://pervocracy.blogspot.ca/2012/07/green-flags.html Ask for examples of red/green flags in Debian. Green flags: Red flags: Apologies / Dealing with issues I don't see the usefulness of apologies that are about accepting blame, or making a person stop complaining. I see apologies as opportunities to understand the problem I caused, help fix it, and possibly find ways of avoiding causing that problem again in the future. A Better Way to Say Sorry lists a 4-step process, which is basically what we already do in bug reports: 1. Try to understand and reproduce the exact problem the person had. 2. Try to find the cause of the issue. 3. Try to find a solution for the issue. 4. Verify with the reporter that the solution does indeed fix the issue. This is just to say
My software ate
the files
that were in
your home directory and which
you were probably
needing
for work

Forgive me
it was so quick to write
without tests
and it worked so well for me
(inspired by a 1934 poem by William Carlos Williams) Don't be afraid to fail Don't be afraid to fail or drop the ball. I think that anything that has a label attached of "if you don't do it, nobody will", shouldn't fall on anybody's shoulders and should be shared no matter what. Shared or dropped. Share the responsibility for a healthy relationship Don't expect that the more experienced mates will take care of everything. In a project with active people counted by the thousand, it's unlikely that harassment isn't happening. Is anyone writing anti-harassment? Do we have stats? Is having an email address and a CoC giving us a false sense of security?
When you get involved in a new community, such as Debian, find out early where, if that happens, you can find support, understanding, and help to make it stop. If you cannot find any, or if the only thing you can find is people who say "it never happens here", consider whether you really want to be in that community.
(from http://www.enricozini.org/blog/2016/debian/you-ll-thank-me-later/)
There are some nice people in the world. I mean nice people, the sort I couldn't describe myself as. People who are friends with everyone, who are somehow never involved in any argument, who seem content to spend their time drawing pictures of bumblebees on flowers that make everyone happy. Those people are great to have around. You want to hold onto them as much as you can. But people only have so much tolerance for jerkiness, and really nice people often have less tolerance than the rest of us. The trouble with not ejecting a jerk, whether their shenanigans are deliberate or incidental, is that you allow the average jerkiness of the community to rise slightly. The higher it goes, the more likely it is that those really nice people will come around less often, or stop coming around at all. That, in turn, makes the average jerkiness rise even more, which teaches the original jerk that their behavior is acceptable and makes your community more appealing to other jerks. Meanwhile, more people at the nice end of the scale are drifting away.
(from https://eev.ee/blog/2016/07/22/on-a-technicality/) Give people freedom If someone tries something in Debian, try to acknowledge and accept their work. You can give feedback on what they are doing, and try not to stand in their way, unless what they are doing is actually hurting you. In that case, try to collaborate, so that you all can get what you need. It's ok if you don't like everything that they are doing. I personally don't care if people tell me I'm good when I do something; I perceive it a bit like "good boy" or "good dog". I rather prefer it if people show an interest, say "that looks useful" or "how does it work?" or "what do you need to deploy this?" Acknowledge that I've done something. I don't care if it's especially liked; give me the freedom to keep doing it. Don't give me rewards, give me space and dignity. Rather than feeding my ego, feed my freedom, and feed my possibility to create.

5 August 2017

Lars Wirzenius: Enabling TRIM/DISCARD on Debian, ext4, luks, and lvm

I realised recently that my laptop isn't set up to send TRIM or DISCARD commands to its SSD. That means the SSD firmware has a harder time doing garbage collection (see the linked Wikipedia page for more details). After some searching, I found two articles by Christopher Smart: one, update. Those, plus some additional reading of documentation, and a little experimentation, allowed me to do this. Since the information is a bit scattered, here are the details, for Debian stretch, as much for my own memory as to make sure this is collected into one place. Note that it seems TRIMming encrypted devices may be a possible information leak. I don't know the details, but if that bothers you, don't do it. I don't know of any harmful effects of enabling TRIM for everything, except the crypto bit above, so I wonder if it wouldn't make sense for the Debian installer to do this by default.

29 July 2017

Robert McQueen: Welcome, Flathub!

Alex Larsson talks about Flathub at GUADEC 2017
At the Gtk+ hackfest in London earlier this year, we stole an afternoon from the toolkit folks (sorry!) to talk about Flatpak, and how we could establish a critical mass behind the Flatpak format. Bringing Linux container and sandboxing technology together with ostree, we've got a technology which solves real-world distribution, technical and security problems which have arguably held back the Linux desktop space and frustrated ISVs and app developers for nearly 20 years. The problem we need to solve, like any ecosystem, is one of users and developers: without stuff you can easily get in Flatpak format, there won't be many users, and without many users, we won't have a strong or compelling incentive for developers to take their precious time to understand a new format and a new technology. As Alex Larsson said in his GUADEC talk yesterday: decentralisation is good. Flatpak is a tool that is totally agnostic of who is publishing the software and where it comes from. For software freedom, that's an important thing, because we want technology to empower users, rather than tell them what they can or can't do. Unfortunately, decentralisation makes for a terrible user experience. At present, the Flatpak webpage has a manually curated list of links to tens of places where you can find different Flatpaks and add them to your system. You can't easily search and browse to find apps to try out, so it's clear that if the current situation remains, we're not going to be able to get a critical mass of users and developers around Flatpak. Enter Flathub. The idea is that by creating an obvious center of gravity for the Flatpak community to contribute and build their apps, users will have one place to go and find the best that the Linux app ecosystem has to offer. We can take care of the boring stuff like running a build service, and empower Linux application developers to choose how and when their app gets out to their users. After the London hackfest we sketched out a minimum viable system (GitHub, Buildbot and a few workers) and got it going over the past few months, culminating in a mini-fundraiser to pay for the hosting of a production-ready setup. Thanks to the 20 individuals who supported our fundraiser, to Mythic Beasts who provided a server along with management, monitoring and heaps of bandwidth, and to Codethink and Scaleway who provide our ARM and Intel workers respectively. We inherit our core principles from the Flatpak project: we want the Flatpak technology to succeed at alleviating the issues faced by app developers in targeting a diverse set of Linux platforms. None of this stops you from building and hosting your own Flatpak repos, and we look forward to this being a wide and open playing field. We care about the success of the Linux desktop as a platform, so we are open to proprietary applications through Flatpak's extra-data feature, where the client machine downloads 3rd-party binaries. They are correctly labeled as such in the AppStream, so will only be shown if you or your OS has configured GNOME Software to show you apps with proprietary licenses, respecting the user's preference. The new infrastructure is up and running and I put it into production on Thursday. We rebuilt the whole repository on the new system over the course of the week, signing everything with our new 4096-bit key stored on a YubiKey smartcard USB key.
We have 66 apps at the moment, and Alex is working on bringing in the GNOME apps at present; we hope those will be joined soon by the KDE apps, and Endless is planning to move over as many of our 3rd-party Flatpaks as possible over the coming months. So, thanks again to Alex and the whole Flatpak community, and the individuals and the companies who supported making this a reality. You can add the repository and get downloading right away. Welcome to Flathub! Go forth and flatten!

19 July 2017

Lars Wirzenius: Dropping Yakking from Planet Debian

A couple of people objected to having Yakking on Planet Debian, so I've removed it.

13 July 2017

Lars Wirzenius: Adding Yakking to Planet Debian

In a case of blatant self-promotion, I am going to add the Yakking RSS feed to the Planet Debian aggregation. (But really because I think some of the readership of Planet Debian may be interested in the content.) Yakking is a group blog by a few friends aimed at new free software contributors. From the front page description:
Welcome to Yakking. This is a blog for topics relevant to someone new to free software development. We assume you are already familiar with computers, and are curious about participating in the production of free software. You don't need to be a programmer: software development requires a wide variety of skills, and you can be a valued core contributor to a project without being a programmer.
If anyone objects, please let me know.

8 July 2017

Daniel Silverstone: Gitano - Approaching Release - Access Control Changes

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects. In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:
define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read
And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet another set of defines:
define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read
This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:
allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]
Of course, this is generally neater for simpler rules; if you wanted to add another user then it might make sense to go for:
define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers
The nice thing about this sub-define syntax is that it's basically usable anywhere you'd use the name of a previously defined thing, they're compiled in much the same way, and Richard worked hard to get good error messages out from them just in case.

No more auto_user_XXX and auto_group_YYY As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, and so the sub-define approach is much much better. If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:
  1. Upgrade your version of lace to 1.3
  2. Replace any auto_user_FOO with [user exact FOO] and similarly for any auto_group_BAR to [group exact BAR].
  3. You can now upgrade Gitano safely.

No more 'basic' matches Since Gitano first gained support for ACLs using Lace, we had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support refs ~^refs/heads/$ user /. When we wanted to add proper PCRE regex support we added a syntax of the form: user pcre ^/.+?... where pcre could be any of: exact, prefix, suffix, pattern, or pcre. We had a complex set of rules for exactly what the sigils at the start of the match string might mean in what order, and it was getting unwieldy. To simplify matters, none of the "backward compatibility" remains in Gitano. You instead MUST use the what how with match form. To make this slightly more natural to use, we have added a bunch of aliases: is for exact, starts and startswith for prefix, and ends and endswith for suffix. In addition, kind of match can be prefixed with a ! to invert it, and for natural looking rules not is an alias for !is. This means that your rulesets MUST be updated to support the more explicit syntax before you update Gitano, or else nothing will compile. Fortunately this form has been supported for a long time, so you can do this in three steps.
  1. Update your gitano-admin.git global ruleset. For example, the old form of the defines used to contain define is_gitano_ref ref ~^refs/gitano/ which can trivially be replaced with: define is_gitano_ref ref prefix refs/gitano/
  2. Update any non-zero rulesets your projects might have.
  3. You can now safely update Gitano
If you want a reference for making those changes, you can look at the Gitano skeleton ruleset which can be found at https://git.gitano.org.uk/gitano.git/tree/skel/gitano-admin/rules/ or in /usr/share/gitano if Gitano is installed on your local system. Next time, I'll likely talk about the deprecated commands which are no longer in Gitano, and how you'll need to adjust your automation to use the new commands.

2 July 2017

Bits from Debian: New Debian Developers and Maintainers (May and June 2017)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!
